114 research outputs found
Robust Multi-Image HDR Reconstruction for the Modulo Camera
Photographing scenes with high dynamic range (HDR) poses great challenges to
consumer cameras with their limited sensor bit depth. To address this, Zhao et
al. recently proposed a novel sensor concept - the modulo camera - which
captures the least significant bits of the recorded scene instead of going into
saturation. Similar to conventional pipelines, HDR images can be reconstructed
from multiple exposures, but significantly fewer images are needed than with a
typical saturating sensor. While the concept is appealing, we show that the
original reconstruction approach assumes noise-free measurements and quickly
breaks down otherwise. To address this, we propose a novel reconstruction
algorithm that is robust to image noise and produces significantly fewer
artifacts. We theoretically analyze correctness as well as limitations, and
show that our approach significantly outperforms the baseline on real data.
Comment: to appear at the 39th German Conference on Pattern Recognition (GCPR) 201
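The wrap-around behaviour described above, and the way a short exposure can anchor the unwrapping of a longer one, can be sketched in a few lines. This is a toy model, not the paper's algorithm: the sensor simulation, the reference-based unwrapping rule, and all parameter values are illustrative assumptions.

```python
import numpy as np

def modulo_capture(irradiance, exposure, bit_depth=8, noise_std=0.0):
    """Toy modulo sensor: readings wrap around 2**bit_depth instead of
    saturating (simplified model for illustration; noise optional)."""
    levels = 2 ** bit_depth
    signal = irradiance * exposure
    if noise_std > 0:
        signal = signal + np.random.default_rng(0).normal(0, noise_std, signal.shape)
    return np.mod(np.round(signal), levels)

def unwrap_with_reference(wrapped, reference, bit_depth=8):
    """Naive multi-exposure unwrapping: choose the wrap count k so that
    wrapped + k * 2**bit_depth lands closest to a reference estimate
    (here derived from a short exposure that never wraps)."""
    levels = 2 ** bit_depth
    k = np.round((reference - wrapped) / levels)
    return wrapped + k * levels

rng = np.random.default_rng(0)
scene = rng.uniform(0, 4000, size=(64, 64))      # HDR irradiance, far beyond 8 bits
short = modulo_capture(scene, exposure=0.05)     # max ~200 < 256, so no wrapping
long_ = modulo_capture(scene, exposure=1.0)      # wraps up to ~15 times
recovered = unwrap_with_reference(long_, short / 0.05)
```

In the noise-free setting this recovers the long exposure exactly; with `noise_std > 0`, readings near a wrap boundary get assigned the wrong wrap count, which is the failure mode the abstract says motivates the robust reconstruction.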
Modeling Camera Effects to Improve Visual Learning from Synthetic Data
Recent work has focused on generating synthetic imagery to increase the size
and variability of training data for learning visual tasks in urban scenes.
This includes increasing the occurrence of occlusions or varying environmental
and weather effects. However, few have addressed modeling variation in the
sensor domain. Sensor effects can degrade real images, limiting
generalizability of network performance on visual tasks trained on synthetic
data and tested in real environments. This paper proposes an efficient,
automatic, physically-based augmentation pipeline to vary sensor effects
(chromatic aberration, blur, exposure, noise, and color cast) for synthetic
imagery. In particular, this paper illustrates that augmenting synthetic
training datasets with the proposed pipeline reduces the domain gap between
synthetic and real domains for the task of object detection in urban driving
scenes.
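A minimal version of such a sensor-effect augmentation pipeline can be sketched as below. All effect models and parameter ranges here are illustrative stand-ins (a box filter for blur, global gains for exposure and colour cast, additive Gaussian noise), not the paper's physically-based models, and chromatic aberration is omitted for brevity.

```python
import numpy as np

def box_blur(channel):
    """3x3 box filter with edge padding, a crude stand-in for optical blur."""
    p = np.pad(channel, 1, mode='edge')
    h, w = channel.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def augment_sensor_effects(img, rng):
    """Apply simplified sensor effects to an RGB image in [0, 1].
    Parameter ranges are illustrative guesses, not the paper's values."""
    out = img * rng.uniform(0.7, 1.3)                                   # exposure shift
    out = out * rng.uniform(0.9, 1.1, size=(1, 1, 3))                   # colour cast
    out = np.stack([box_blur(out[..., c]) for c in range(3)], axis=-1)  # blur
    out = out + rng.normal(0.0, 0.02, out.shape)                        # sensor noise
    return np.clip(out, 0.0, 1.0)

rng = np.random.default_rng(1)
synthetic = rng.uniform(0, 1, size=(32, 32, 3))
augmented = augment_sensor_effects(synthetic, rng)
```

Applying such a function on the fly during training is what makes the pipeline "automatic": each synthetic image is seen under a different random draw of sensor effects.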
Chlorobis(naphthalen-1-yl)phosphane
In the title compound, C20H14ClP, the dihedral angle between the naphthyl rings is 81.77 (6)°. The crystal packing suggests weak π–π stacking interactions between the naphthyl rings in adjacent units [minimum ring centroid separation 3.7625 (13) Å].
Improving Blind Spot Denoising for Microscopy
Many microscopy applications are limited by the total amount of usable light
and are consequently challenged by the resulting levels of noise in the
acquired images. This problem is often addressed via (supervised) deep learning
based denoising. Recently, by making assumptions about the noise statistics,
self-supervised methods have emerged. Such methods are trained directly on the
images that are to be denoised and do not require additional paired training
data. While achieving remarkable results, self-supervised methods can produce
high-frequency artifacts and achieve inferior results compared to supervised
approaches. Here we present a novel way to improve the quality of
self-supervised denoising. Considering that light microscopy images are usually
diffraction-limited, we propose to include this knowledge in the denoising
process. We assume the clean image to be the result of a convolution with a
point spread function (PSF) and explicitly include this operation at the end of
our neural network. As a consequence, we are able to eliminate high-frequency
artifacts and achieve self-supervised results that are very close to the ones
achieved with traditional supervised methods.
Comment: 15 pages, 4 figures
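The key architectural idea — appending a fixed, non-learned convolution with the PSF after the network output — can be sketched as follows. The Gaussian kernel is only a stand-in for a real microscope PSF, and the FFT-based circular convolution is one simple way to implement the fixed layer.

```python
import numpy as np

def gaussian_psf(size=9, sigma=1.5):
    """Isotropic Gaussian kernel as a simple stand-in for a microscope PSF."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    psf = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return psf / psf.sum()

def convolve_psf(img, psf):
    """Fixed convolution with the PSF (circular boundary via FFT).
    In the described approach this operation sits after the network's
    output, so the network is free to predict a sharper underlying image
    while the loss is computed against diffraction-limited data."""
    h, w = img.shape
    padded = np.zeros((h, w))
    k = psf.shape[0]
    padded[:k, :k] = psf
    padded = np.roll(padded, (-(k // 2), -(k // 2)), axis=(0, 1))  # centre kernel at (0,0)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(padded)))

blurred = convolve_psf(np.ones((32, 32)), gaussian_psf())
```

Because the PSF layer is low-pass, any high-frequency artifacts the network might produce are suppressed before the loss is evaluated, which is the mechanism the abstract credits for the improvement.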
Deep Burst Denoising
Noise is an inherent issue of low-light image capture, one which is
exacerbated on mobile devices due to their narrow apertures and small sensors.
One strategy for mitigating noise in a low-light situation is to increase the
shutter time of the camera, thus allowing each photosite to integrate more
light and decrease noise variance. However, there are two downsides of long
exposures: (a) bright regions can exceed the sensor range, and (b) camera and
scene motion will result in blurred images. Another way of gathering more light
is to capture multiple short (thus noisy) frames in a "burst" and intelligently
integrate the content, thus avoiding the above downsides. In this paper, we use
the burst-capture strategy and implement the intelligent integration via a
recurrent fully convolutional deep neural net (CNN). We build our novel,
multiframe architecture to be a simple addition to any single frame denoising
model, and design to handle an arbitrary number of noisy input frames. We show
that it achieves state of the art denoising results on our burst dataset,
improving on the best published multi-frame techniques, such as VBM4D and
FlexISP. Finally, we explore other applications of image enhancement by
integrating content from multiple frames and demonstrate that our DNN
architecture generalizes well to image super-resolution.
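The statistical argument for burst capture — N short noisy frames integrate to roughly 1/N of the single-frame noise variance — can be checked numerically. This sketch uses plain averaging of perfectly aligned frames as the naive baseline; the paper's contribution is replacing that averaging with a learned recurrent integration that also handles misalignment.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = rng.uniform(0.0, 1.0, size=(128, 128))   # stand-in for the true scene
sigma = 0.1                                      # per-frame noise level (illustrative)

def capture_burst(n_frames):
    """n short exposures of the same, perfectly aligned scene."""
    return clean + rng.normal(0.0, sigma, size=(n_frames,) + clean.shape)

single = capture_burst(1)[0]
merged = capture_burst(8).mean(axis=0)           # naive burst integration

mse_single = np.mean((single - clean) ** 2)
mse_merged = np.mean((merged - clean) ** 2)
# Averaging 8 frames cuts the noise variance by ~8x, without the
# saturation and motion-blur penalties of one long exposure.
```

Real bursts violate the perfect-alignment assumption, which is why a learned, motion-aware integration outperforms the plain mean.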
A multi-wavelength polarimetric study of the blazar CTA 102 during a Gamma-ray flare in 2012
We perform a multi-wavelength polarimetric study of the quasar CTA 102 during
an extraordinarily bright γ-ray outburst detected by the Fermi
Large Area Telescope in September-October 2012, when the source reached a flux
of F photons cm⁻² s⁻¹.
At the same time the source displayed an unprecedented optical and NIR
outburst. We study the evolution of the parsec-scale jet with ultra-high
angular resolution through a sequence of 80 total and polarized intensity Very
Long Baseline Array images at 43 GHz, covering the observing period from June
2007 to June 2014. We find that the γ-ray outburst is coincident with
flares at all the other frequencies and is related to the passage of a new
superluminal knot through the radio core. The powerful γ-ray emission is
associated with a change in direction of the jet, which became oriented more
closely to our line of sight (1.2°) during the ejection of
the knot and the γ-ray outburst. During the flare, the optical polarized
emission displays intra-day variability and a clear clockwise rotation of
EVPAs, which we associate with the path followed by the knot as it moves along
helical magnetic field lines, although a random walk of the EVPA caused by a
turbulent magnetic field cannot be ruled out. We locate the γ-ray
outburst a short distance downstream of the radio core, parsecs from the black
hole. This suggests that synchrotron self-Compton scattering of near-infrared
to ultraviolet photons is the probable mechanism for the γ-ray production.
Comment: Accepted for publication in The Astrophysical Journal
End-to-end Interpretable Learning of Non-blind Image Deblurring
Non-blind image deblurring is typically formulated as a linear least-squares
problem regularized by natural priors on the corresponding sharp picture's
gradients, which can be solved, for example, using a half-quadratic splitting
method with Richardson fixed-point iterations for its least-squares updates and
a proximal operator for the auxiliary variable updates. We propose to
precondition the Richardson solver using approximate inverse filters of the
(known) blur and natural image prior kernels. Using convolutions instead of a
generic linear preconditioner allows extremely efficient parameter sharing
across the image, and leads to significant gains in accuracy and/or speed
compared to classical FFT and conjugate-gradient methods. More importantly, the
proposed architecture is easily adapted to learning both the preconditioner and
the proximal operator using CNN embeddings. This yields a simple and efficient
algorithm for non-blind image deblurring which is fully interpretable, can be
learned end to end, and whose accuracy matches or exceeds the state of the art,
quite significantly, in the non-uniform case.
Comment: Accepted at ECCV 2020 (poster)
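The classical FFT route mentioned in the abstract — the closed-form least-squares update inside half-quadratic splitting — can be sketched as below. This is the baseline solver, not the paper's preconditioned Richardson method; the circular-boundary assumption and the box-blur kernel are illustrative choices.

```python
import numpy as np

def fft_of_kernel(kernel, shape):
    """FFT of a small blur kernel zero-padded to image size, centred at (0,0)."""
    padded = np.zeros(shape)
    k = kernel.shape[0]
    padded[:k, :k] = kernel
    padded = np.roll(padded, (-(k // 2), -(k // 2)), axis=(0, 1))
    return np.fft.fft2(padded)

def hqs_data_update(y, K, z, rho):
    """Closed-form least-squares step of half-quadratic splitting:
        argmin_x ||k * x - y||^2 + rho * ||x - z||^2,
    solved exactly in the Fourier domain (assuming circular boundaries).
    The paper replaces this classical step with a preconditioned Richardson
    solver whose filters, like the proximal operator, can be learned."""
    Y, Z = np.fft.fft2(y), np.fft.fft2(z)
    X = (np.conj(K) * Y + rho * Z) / (np.abs(K) ** 2 + rho)
    return np.real(np.fft.ifft2(X))

rng = np.random.default_rng(0)
x_true = rng.uniform(0, 1, (64, 64))
K = fft_of_kernel(np.full((5, 5), 1 / 25.0), x_true.shape)   # 5x5 box blur
y = np.real(np.fft.ifft2(K * np.fft.fft2(x_true)))           # noise-free blurry image
x_hat = hqs_data_update(y, K, z=x_true, rho=1e-3)            # ideal auxiliary z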
W2S: Microscopy Data with Joint Denoising and Super-Resolution for Widefield to SIM Mapping
In fluorescence microscopy live-cell imaging, there is a critical trade-off
between the signal-to-noise ratio and spatial resolution on one side, and the
integrity of the biological sample on the other side. To obtain clean
high-resolution (HR) images, one can either use microscopy techniques, such as
structured-illumination microscopy (SIM), or apply denoising and
super-resolution (SR) algorithms. However, the former option requires multiple
shots that can damage the samples, and although efficient deep learning based
algorithms exist for the latter option, no benchmark exists to evaluate these
algorithms on the joint denoising and SR (JDSR) tasks. To study JDSR on
microscopy data, we propose such a novel JDSR dataset, Widefield2SIM (W2S),
acquired using a conventional fluorescence widefield and SIM imaging. W2S
includes 144,000 real fluorescence microscopy images, resulting in a total of
360 sets of images. A set is comprised of noisy low-resolution (LR) widefield
images with different noise levels, a noise-free LR image, and a corresponding
high-quality HR SIM image. W2S allows us to benchmark the combinations of 6
denoising methods and 6 SR methods. We show that state-of-the-art SR networks
perform very poorly on noisy inputs. Our evaluation also reveals that applying
the best denoiser in terms of reconstruction error followed by the best SR
method does not necessarily yield the best final result. Both quantitative and
qualitative results show that SR networks are sensitive to noise and the
sequential application of denoising and SR algorithms is sub-optimal. Lastly,
we demonstrate that SR networks retrained end-to-end for JDSR outperform any
combination of state-of-the-art deep denoising and SR networks.
Comment: ECCVW 2020. Project page: https://github.com/ivrl/w2s
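The evaluation protocol — scoring (denoiser, SR) pipeline combinations against HR ground truth — can be sketched with toy stand-ins. The box filter and nearest-neighbour upsampler below are hypothetical placeholders for the 6 denoisers and 6 SR networks the benchmark actually uses, and PSNR is one of the standard reconstruction-error metrics.

```python
import numpy as np

def psnr(x, ref, peak=1.0):
    """Peak signal-to-noise ratio in dB."""
    return 10 * np.log10(peak ** 2 / np.mean((x - ref) ** 2))

def box_denoise(img):
    """3x3 box filter: a toy stand-in for a learned denoiser."""
    p = np.pad(img, 1, mode='edge')
    h, w = img.shape
    return sum(p[i:i + h, j:j + w] for i in range(3) for j in range(3)) / 9.0

def nearest_sr(img, s=2):
    """Nearest-neighbour upsampling: a toy stand-in for an SR network."""
    return img.repeat(s, axis=0).repeat(s, axis=1)

# Smooth synthetic 'HR ground truth' plus a noisy LR observation
x = np.linspace(0, 3 * np.pi, 64)
hr = 0.5 + 0.4 * np.outer(np.sin(x), np.cos(x))
lr = hr[::2, ::2]
rng = np.random.default_rng(0)
noisy_lr = np.clip(lr + rng.normal(0, 0.1, lr.shape), 0, 1)

# Benchmark grid over pipelines, scored against the HR image
scores = {
    "sr_only": psnr(nearest_sr(noisy_lr), hr),
    "denoise_then_sr": psnr(nearest_sr(box_denoise(noisy_lr)), hr),
}
```

Even this toy grid shows why SR alone suffers on noisy inputs; the benchmark's stronger finding — that the best denoiser followed by the best SR method is still beaten by a jointly trained network — requires the real dataset and models.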